The Plant Phenome Journal
Wiley
All preprints, ranked by how well they match The Plant Phenome Journal's content profile, based on 14 papers previously published here. The average preprint has a 0.01% match score for this journal, so anything above that is already an above-average fit. Older preprints may already have been published elsewhere.
Xu, R.; Ferguson, J. N.; Kromdijk, J.; Nikoloski, Z.
Hyperspectral reflectance provides rapid and precise phenotyping of plants in a non-destructive manner both in field and well-controlled settings. The resulting data have been used to devise machine learning (ML) models for paired measurements of different traits in diverse plants and crops. Yet, despite advances in using hyperspectral data to reliably predict crop traits of interest, there are pressing issues concerning the training of ML models, the aggregation of data from crop field trials, and the generalizability of the models in different prediction settings. We collected hyperspectral reflectance data along with 25 anatomical, gas exchange, and chlorophyll fluorescence traits from 320 recombinant inbred lines of a maize Multi-Parent Advanced Generation Inter-Cross population grown across three consecutive seasons. We use these data to systematically: (1) compare the performance of representative ML models for different traits, including slow fluorescence kinetics, whose predictability by hyperspectral data has not yet been investigated, (2) evaluate ML model performance in prediction scenarios concerning unseen genotypes, unseen seasons, and the combination thereof, and (3) investigate the effects of data aggregation on ML model performance. These problems are addressed in a rigorous nested cross-validation setting that provides a template for adequate assessment of the performance of ML models for diverse crop traits, considering the particularities of the experimental design. Significance Statement: We present a comprehensive evaluation of hyperspectral reflectance for predicting 25 physiological traits, including fluorescence kinetics, in maize across three seasons using rigorous nested cross-validation. By comparing ML models, prediction scenarios, and data aggregation strategies, our study reveals trait-specific limits of generalizability and offers a robust framework for deploying hyperspectral data in breeding applications.
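The nested cross-validation design described above can be sketched with scikit-learn: an inner loop tunes hyperparameters while an outer loop estimates generalization error. This is a minimal illustration on simulated data, not the authors' pipeline; the `Ridge` model, array sizes, and alpha grid are all stand-ins, and a real analysis would group folds by genotype or season (e.g. with `GroupKFold`) to build the unseen-genotype and unseen-season scenarios.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, GridSearchCV, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 50))             # stand-in for hyperspectral bands
y = 2.0 * X[:, 0] + rng.normal(size=120)   # stand-in trait value

inner = KFold(n_splits=3, shuffle=True, random_state=0)  # tunes hyperparameters
outer = KFold(n_splits=5, shuffle=True, random_state=0)  # estimates generalization

# The inner search is wrapped as a single estimator, so each outer test
# fold never influences hyperparameter selection.
model = GridSearchCV(Ridge(), {"alpha": [0.1, 1.0, 10.0]}, cv=inner)
scores = cross_val_score(model, X, y, cv=outer, scoring="r2")
print(scores.mean())
```

The key point is that the outer test folds are untouched by the inner tuning loop, which is what makes the performance estimate honest.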
Tirado, S. B.; St Dennis, S.; Enders, T. A.; Springer, N. M.
There is significant enthusiasm about the potential for hyperspectral imaging to document variation among plant species, genotypes, or growing conditions. However, in many cases the application of hyperspectral imaging is performed in highly controlled situations that focus on a flat portion of a leaf or side-views of plants that would be difficult to obtain in field settings. We were interested in assessing the potential for applying hyperspectral imaging to document variation in genotypes or abiotic stresses in a fashion that could be implemented in field settings. Specifically, we focused on collecting top-down hyperspectral images of maize seedlings similar to a view that would be collected in a typical maize field. A top-down image of a maize seedling includes a view into the funnel-like whorl at the center of the plant with several leaves radiating outwards. There is substantial variability in the reflectance profile of different portions of this plant. To deal with the variability in reflectance profiles that arises from this morphology, we implemented a method that divides the longest leaf into 10 segments from the center to the leaf tip. We show that using these segments provides improved ability to discriminate different genotypes or abiotic stress conditions (heat, cold, or salinity stress) for maize seedlings. We also found substantial differences in the ability to successfully classify abiotic stress conditions among different inbred genotypes of maize. This provides an approach to classifying genotypic and environmental variation for maize seedlings that could be implemented in field settings. Significance Statement: This study describes the importance of using spatial information for the analysis of hyperspectral images of maize seedlings. The segmentation of maize seedling leaves provides improved resolution for using hyperspectral variation to document genotypic and environmental variation in maize.
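The center-to-tip division of the longest leaf into ten segments lends itself to a short sketch. This assumes reflectance spectra sampled at points ordered along the leaf midrib, an assumption about the data layout rather than the authors' actual code:

```python
import numpy as np

def segment_profile(spectra, n_segments=10):
    """Average reflectance within equal-length segments along a leaf,
    ordered from whorl center to leaf tip.

    spectra: (n_points, n_bands) reflectance sampled along the midrib.
    Returns (n_segments, n_bands) mean spectrum per segment.
    """
    edges = np.linspace(0, len(spectra), n_segments + 1).astype(int)
    return np.array([spectra[a:b].mean(axis=0)
                     for a, b in zip(edges[:-1], edges[1:])])

# 100 sample points along the leaf, 240 spectral bands (illustrative sizes)
profile = segment_profile(np.random.rand(100, 240))
print(profile.shape)  # (10, 240)
```

The resulting per-segment spectra, rather than one whole-leaf average, would then feed the genotype or stress classifier.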
Miao, C.; Hoban, T. P.; Pages, A.; Xu, Z.; Rodene, E.; Ubbens, J.; Stavness, I.; Yang, J.; Schnable, J. C.
Automatically scoring plant traits using a combination of imaging and deep learning holds promise to accelerate data collection, scientific inquiry, and breeding progress. However, applications of this approach are currently held back by the availability of large and suitably annotated training datasets. Early training datasets targeted Arabidopsis or tobacco, whose morphology is quite different from that of grass species like maize. Two sets of maize training data, one real-world and one synthetic, were generated and annotated for late vegetative stage maize plants using leaf count as a model trait. Convolutional neural networks (CNNs) trained on entirely synthetic data provided predictive power for scoring leaf number in real-world images. This power was less than that of CNNs trained with equal numbers of real-world images; however, in some cases CNNs trained with larger numbers of synthetic images outperformed CNNs trained with smaller numbers of real-world images. When real-world training images were scarce, augmenting real-world training data with synthetic data improved prediction accuracy. Quantifying leaf number over time can provide insight into plant growth rates and stress responses, and can help to parameterize crop growth models. The approaches and annotated training data described here may help future efforts to develop accurate leaf counting algorithms for maize.
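Augmenting scarce real-world training data with synthetic images, as described above, amounts to pooling and shuffling the two sources before training. A minimal sketch with hypothetical array shapes and leaf-count labels (the CNN itself is omitted, and none of these names come from the paper):

```python
import numpy as np

def mix_training_data(real_x, real_y, synth_x, synth_y, seed=0):
    """Augment a scarce set of real-world images with synthetic ones,
    shuffling so training batches interleave both sources."""
    x = np.concatenate([real_x, synth_x])
    y = np.concatenate([real_y, synth_y])
    idx = np.random.default_rng(seed).permutation(len(x))
    return x[idx], y[idx]

# 50 real and 500 synthetic 64x64 grayscale images with leaf-count labels
real_x, real_y = np.random.rand(50, 64, 64), np.random.randint(4, 12, 50)
synth_x, synth_y = np.random.rand(500, 64, 64), np.random.randint(4, 12, 500)
x, y = mix_training_data(real_x, real_y, synth_x, synth_y)
print(x.shape)  # (550, 64, 64)
```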
Feldman, M. J.; Park, J.; Miller, N.; Wakholi, C.; Greene, K.; Abbasi, A.; Rippner, D. A.; Navarre, D.; Carley, C. S.; Shannon, L. M.; Novy, R.
Tuber size, shape, colorimetric characteristics, and defect susceptibility are all factors that influence the acceptance of new potato cultivars. Despite the importance of these characteristics, our understanding of their inheritance is substantially limited by our inability to precisely measure these features quantitatively on the scale needed to evaluate breeding populations. To alleviate this bottleneck, we developed a low-cost, semi-automated workflow to capture data and measure each of these characteristics using machine vision. This workflow was applied to assess the phenotypic variation present within 189 F1 progeny of the A08241 breeding population. Our results provide an example of quantitative measurements acquired using machine vision methods that are reliable, heritable, and can be used to understand and select upon multiple traits simultaneously in structured potato breeding populations.
Siebers, M. H.; Fu, P.; Long, S. P.; McGrath, J. M.; Bernacchi, C. J.
Stand count, the number of plants per unit ground area, and leaf area index (LAI), the ratio of leaf area to ground area, are critical traits for crop research but are traditionally measured using labor-intensive methods. While new sensing technologies are being developed, quantification of improvements in measurement efficiency and data quality relative to traditional techniques is lacking. In this study, we use LiDAR to generate 3D scans of corn and soybean plots and evaluate two computational methods: a gap fraction approach to estimate LAI and a persistent homology algorithm to estimate stand count by detecting structural peaks in the canopy. Validation experiments and statistical comparisons of bias and variance demonstrate that LiDAR-derived LAI estimates in corn are comparable in quality to those from established instruments. However, in soybean, the LiDAR method performs poorly, likely due to dense canopies limiting light penetration and structural differentiation. Stand count estimations in corn closely match manual counts, with the added benefit of full-plot coverage and significantly faster data collection. In soybean, stand count estimates are unreliable under dense canopy conditions. These results offer practical guidance for the use of LiDAR in field phenotyping and highlight both its current capabilities and limitations. While a trade-off between speed and precision remains, particularly in high-density canopies, LiDAR's scalability and multi-trait potential make it a promising tool for high-throughput breeding programs. Continued improvements in LiDAR hardware and algorithm design may further enhance measurement accuracy and extend applicability across crops and growth stages.
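Gap-fraction estimates of LAI are commonly an inversion of Beer's law, under which the probability of a pulse reaching the ground is P = exp(−k·LAI). A minimal sketch of that inversion; the extinction coefficient k = 0.5 is an assumed value typical of roughly spherical leaf angle distributions, not a parameter reported by the study:

```python
import numpy as np

def lai_from_gap_fraction(gap_fraction, k=0.5):
    """Invert Beer's law: gap fraction P = exp(-k * LAI),
    so LAI = -ln(P) / k. k is the canopy extinction coefficient."""
    return -np.log(gap_fraction) / k

# A plot where 20% of LiDAR pulses reach the ground unobstructed
print(lai_from_gap_fraction(0.2))  # ~3.22
```

This also hints at why the method degrades in dense soybean canopies: as the gap fraction approaches zero, the logarithm amplifies measurement noise.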
DeSalvio, A. J.; Matabuena, M.; Adak, A.; Arik, M. A.; DeSalvio, S. M.; Murray, S. C.; Wong, R. K. W.; Edwards, J.; de Leon, N.; Kaeppler, S. M.; Lima, D. C.; Hirsch, C. N.; Thompson, A.; Stelly, D. M.
Genomic and phenomic analyses suggest additional heritable phenomic features can improve modeling of important end traits like senescence or yield. Field phenotyping generally uses trait values averaged across individual experimental units (plants or numerous plants within plots), ignoring the full distributional pattern of collected measures. Images of plants or plots, as captured by unoccupied aerial vehicles (UAVs, or drones), can be viewed as individual distribution functions that capture biological information. This study introduces and validates distributional data analysis in two crops and experiment types: cotton (Gossypium hirsutum L.) single-plant vegetation index (VI) analysis and maize (Zea mays L.) plot-level yield predictions. In both crops, the concept of within-day variance decomposition was demonstrated. In cotton, genotypes exerted significant influences on temporal quantile functions of VIs. Maize yield prediction using distributional data with elastic-net regression showed improvements in yield prediction of 12.7%-21.6%, with quantiles outside the conventionally used median responsible for the added predictive power. A novel data visualization method for per-pixel heritability allowed distributional features to be explainable and interpretable. These results have implications for future plant phenomic studies, indicating that distributional data analysis applied across temporal imagery captures novel, heritable, and interpretable biological signal that is lost when working with conventional measures of central tendency such as mean or median summary values of experimental units. Significance: Repeated aerial imaging of agricultural experiments produces image data sets that capture plant development in high spatial and temporal resolution. Frequently, images are summarized by measures of central tendency, such as mean or median values. Here, functional data distributional methods were applied to cotton (Gossypium hirsutum L.) and maize (Zea mays L.) image data, capturing more information than standard approaches. Cotton genotypes significantly impacted distributional spectral data, while in maize, distributional data enabled more accurate predictions of grain yield versus models trained with median data alone. Distributional data were more explainable by genetics, with novel data visualization techniques able to shine light on specific parts of plant imagery with high and low genetic variance.
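Replacing a plot's mean or median vegetation index with its quantile function is straightforward to sketch. The Beta-distributed pixels below are simulated stand-ins for NDVI pixel values, not data from the study, and the quantile grid is an arbitrary choice:

```python
import numpy as np

def vi_quantile_features(pixel_vi, qs=np.linspace(0.05, 0.95, 19)):
    """Summarize the within-plot distribution of a vegetation index
    by its quantile function rather than a single mean or median."""
    return np.quantile(pixel_vi, qs)

# Stand-in for one plot's NDVI pixel values on one flight date
pixels = np.random.default_rng(1).beta(5, 2, size=5000)
feats = vi_quantile_features(pixels)
print(feats.shape)  # (19,)
```

Stacking these quantile vectors across flight dates gives the kind of distributional feature matrix that an elastic-net yield model can consume in place of per-date medians.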
Brown, K.; Schuhl, H.; Srivastava, D.; Beyene, G.; Li, M.; Fahlgren, N.; Murphy, K. M.
Lodging is a major contributor to decreased yield in tef, a staple cereal crop in Ethiopia. Semidwarf varieties have been developed with the goal of increasing yield through reduced lodging, but studying lodging susceptibility currently requires a labor-intensive, imprecise, manual scoring method. Here we present workflows for analyzing tef stand height from UAS sensors across time, both to predict lodging later in the season from early height and to measure the severity of lodging after a storm event. We compare 3D point clouds generated by photogrammetry from RGB images with those generated from LiDAR to estimate height, demonstrating that they produce similar results despite differences in cost. Stand height and lodging can both be accurately measured with low-cost UAS, reducing the need for manual measurements and increasing precision and temporal resolution in plant breeding programs. Significance Statement: Extreme weather or heavy grain can cause plant stems to bend, a process called lodging. Lodging significantly reduces crop yields globally, particularly in grain crops such as tef (Eragrostis tef). Semidwarf crops have previously been reported to be lodging-resistant, increasing crop yields. Here, we used uncrewed aerial systems (UAS) to measure plant growth, height, and lodging in gene-edited semidwarf tef lines, and compared the results to ground-truth data. Using a UAS equipped with a red-green-blue (RGB) camera or LiDAR sensor, we measured plant height and lodging, and found that early-season height measurements could predict future lodging potential. The tools used were contributed to the open-source software PlantCV-Geospatial for community use. This work contributes to a broader understanding of genetic resistance to lodging, providing valuable insights for tef crop improvement and reducing the need for labor-intensive manual measurements.
Orvati Nia, F.; Peeples, J.; Murray, S. C.; McFarland, A.; Vann, T.; Salehi, S.; Hardin, R.; Baltensperger, D. D.; Ibrahim, A.; Thomasson, J. A.; Fadamiro, H.; Subramanian, N. K.; Oladepo, N.; Vysyaraju, U.
Advances in automation, imaging, and artificial intelligence have enabled researchers to capture large volumes of high-quality plant data for understanding crop growth, stress, and genotype-by-environment interactions. While genomics has achieved remarkable throughput, phenotypic data acquisition remains a critical bottleneck for accelerating crop improvement and biological discovery. To address this challenge, an integrated multispectral phenotyping framework was developed using imagery from the Texas A&M AgriLife Precision Automated Phenotyping Greenhouse, a fully controlled facility designed for reproducible plant monitoring throughout the entire growth cycle of most crops. The framework expands the Plant Growth and Phenotyping (PGP v2) dataset and establishes a standardized system for continuous image acquisition, segmentation, deep feature extraction, and temporal analysis across multiple crop species. The project was organized around five coordinated areas: Administration and Coordination, Imaging and Sensor Operations, Data Processing and Management, Artificial Intelligence and Analytics, and Plant Science and Discovery. This structure ensured consistent data quality, version-controlled workflows, and communication across disciplines. The analytical pipeline integrates pseudo-RGB generation, deep learning-based detection and segmentation, image stitching, and temporal (longitudinal) tracking to isolate individual plants and analyze changes in morphology, spectral reflectance, and texture over time. Beyond technical innovation, the framework provides a replicable model for interdisciplinary collaboration and administrative integration in plant phenomics. The combined dataset, workflow, and management framework enable scalable, reproducible, and data-driven plant science research that bridges engineering and biological discovery. 
Plain Language Summary: Temporal imaging of plants in controlled environments helps scientists better understand growth and biological processes. However, analyzing large volumes of images has been limited by a lack of automated tools. Multispectral imagery captures additional information about plant pigments, structure, and stress beyond standard color images. We developed an automated analysis pipeline that identifies individual plants, tracks their growth over time, and measures traits such as height, area, shape, texture, and vegetation indices. Using artificial intelligence, the system efficiently processes thousands of images to provide consistent and repeatable measurements. By integrating engineering and plant biology, this work supports data-driven decisions for crop improvement and agricultural research.
Cooper, J.; Sweet, D. D.; Tirado, S. B.; Springer, N. M.; Hirsch, C. N.; Hirsch, C. D.
Canopy cover is an important agronomic trait influencing photosynthesis, weed suppression, biomass accumulation, and yield. Conventional methods to quantify canopy cover are time- and labor-intensive. As such, little is known about how canopy cover develops over time, the stability of canopy cover across environments, or the genetic architecture of canopy cover. We used unoccupied aerial vehicle-mediated image capture to quantify plot-level canopy coverage in maize throughout the growing season. Images of 501 diverse inbred lines were acquired between 300 and 1300 growing degree days in the 2018-2021 growing seasons. We observed that the maize canopy developed following a logistic curve. Phenotypic variation in percent canopy coverage and canopy growth rate was explained by genetic and environmental factors and genotype-by-environment interactions; however, the percentage of variance explained by each factor varied throughout the growing season. Environmental factors explained the largest portion of trait variance during the adult vegetative growth stage and had a larger impact on canopy growth rates than on percent canopy coverage. We conducted multiple genome-wide association studies and found that canopy cover is a complex, polygenic trait with a diverse range of marker-trait associations throughout development. The change in associations indicated that single time point phenotyping was insufficient to capture the full phenomic and genetic diversity of canopy cover in maize.
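The logistic development of canopy cover can be fit per plot with `scipy.optimize.curve_fit`. The parameterization and all numeric values below are illustrative stand-ins, not estimates from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(gdd, k_max, r, gdd_mid):
    """Logistic canopy development: percent coverage rises toward k_max,
    with inflection at gdd_mid and growth rate r."""
    return k_max / (1.0 + np.exp(-r * (gdd - gdd_mid)))

# Simulated observations over 300-1300 growing degree days
gdd = np.linspace(300, 1300, 25)
true_cover = logistic(gdd, 95.0, 0.012, 800.0)
obs = true_cover + np.random.default_rng(2).normal(0, 2.0, gdd.size)

params, _ = curve_fit(logistic, gdd, obs, p0=[100.0, 0.01, 800.0])
print(params)  # roughly recovers [95, 0.012, 800]
```

The fitted rate `r` and asymptote `k_max` are the kind of per-plot growth parameters that could then be analyzed for genetic and environmental variance components.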
Morales, N.; Anche, M. T.; Kaczmar, N. S.; Lepak, N.; Ni, P.; Romay, M. C.; Santantonio, N.; Buckler, E. S.; Gore, M. A.; Mueller, L. A.; Robbins, K. R.
Design randomizations and spatial corrections have increased understanding of genotypic, spatial, and residual effects in field experiments, but precisely measuring spatial heterogeneity in the field remains a challenge. To this end, our study evaluated approaches to improve spatial modeling using high-throughput phenotypes (HTP) via unoccupied aerial vehicle (UAV) imagery. The normalized difference vegetation index (NDVI) was measured by a multi-spectral MicaSense camera and ImageBreed. In contrast to a baseline agronomic trait spatial correction and a baseline multi-trait model, a two-stage approach that quantified NDVI local environmental effects (NLEE) was proposed. First, NLEE were separated from additive genetic effects over the growing season using two-dimensional spline (2DSpl), separable autoregressive (AR1), or random regression (RR) models. Second, the NLEE were leveraged within agronomic trait genomic best linear unbiased prediction (GBLUP), either by modeling an empirical covariance for random effects or by modeling fixed effects as an average of NLEE across time or split among three growth phases. Modeling approaches were tested using simulation data and Genomes-to-Fields (G2F) hybrid maize (Zea mays L.) field experiments in 2015, 2017, 2019, and 2020 for grain yield, grain moisture, and ear height. The two-stage approach improved heritability, model fit, and genotypic effect estimation compared to all baseline models. Electrical conductivity and elevation from a 2019 soil survey significantly improved model fit, while 2DSpl NLEE were most correlated to the soil parameters and grain yield 2DSpl effects. Simulation of field effects demonstrated improved specificity for RR models. In summary, NLEE increased experimental accuracy and understanding of field spatio-temporal heterogeneity.
Zenkl, R.; McDonald, B. A.; Anderegg, J.
Accurate quantification of plant disease is essential for resistance breeding, variety testing, and precision agriculture, yet visual ratings are limited by subjectivity, low precision, and restricted throughput. Image-based phenotyping can address these limitations, but field applications face substantial challenges due to spatial heterogeneity, symptom-level diagnostic requirements, and the need for very high-resolution imagery with limited spatial coverage. This introduces a fundamental trade-off: high-resolution images provide precise local measurements of disease, but spot-level estimates can be highly variable within experimental units. We analyzed a large image data set of wheat foliar diseases to characterize the distribution, spatial dependence, and aggregation behavior of spot-level severity estimates in plots. We combined high-resolution macro-scale imaging with focus bracketing to increase the sampled leaf area. Our results highlight focus bracketing as a promising approach for simultaneous diagnosis and quantification of disease in field plots. Autocorrelation in severity estimates both within focal image stacks and across plot positions was comparable, with 10 focal stack images or 10 positions per plot contributing approximately 2.5 independent observations each. Modeling plot-level severity as a latent Beta-distributed variable enabled robust estimation of mean severity and associated uncertainty. This supports both hypothesis testing and efficient sampling across the full range of disease severity associated with genotypic diversity and the seasonality of developing epidemics. The proposed imaging approach is non-invasive and, in principle, transferable to autonomous ground-based phenotyping platforms, offering the potential to shift the dominant source of uncertainty in estimating disease severity from measurement-related limitations toward biologically and environmentally driven variability in disease expression.
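Treating plot-level severity as Beta-distributed can be sketched with a method-of-moments fit, which turns noisy spot-level severities into a mean estimate with a parametric distribution behind it. The simulated spot-level severities below are stand-ins, and this is not the authors' latent-variable estimator:

```python
import numpy as np

def beta_fit_moments(severity):
    """Method-of-moments fit of a Beta distribution to spot-level
    severity estimates in [0, 1]; returns (alpha, beta)."""
    m, v = severity.mean(), severity.var()
    common = m * (1 - m) / v - 1
    return m * common, (1 - m) * common

rng = np.random.default_rng(3)
spots = rng.beta(2.0, 8.0, size=200)  # stand-in spot-level severities
a, b = beta_fit_moments(spots)
print(a / (a + b))  # fitted mean severity, close to 0.2
```

A full treatment would also down-weight the effective sample size for spatial autocorrelation (the abstract's ~2.5 independent observations per 10 spots), which a simple moments fit ignores.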
Meyering, B.; Schlautman, B.
Modern, conventional row crop agricultural production relies on clean tillage of croplands and bare soil during the dormant season. While this paradigm of crop production has undoubtedly led to great increases in grain yields and efficiency, it has also resulted in significant soil erosion, groundwater contamination, degradation of local ecology, and hypoxic dead zones in US watersheds. Cover cropping with perennial plant species has been proposed as a way to mitigate these negative effects of crop production while having a minimal impact on crop yields. Measuring establishment of these perennial groundcovers (PGC) in research trials is subjective, tedious, and time-consuming when calculated with traditional methods, whereas image-based analyses are objective, efficient, and reproducible. For this project we developed a deep learning approach using state-of-the-art CNN architectures to estimate PGC establishment in research plots using a variety of open-source and internal image datasets. Our novel approach uses region of interest (ROI) markers in the field to bound the predictions, which improves upon other methods. We deployed the models on AWS SageMaker serverless endpoints and built a lightweight Django web application to host the images and inference services. Researchers will be able to acquire plot images with smartphone cameras and get fast, reliable data from their research trials using this "Local Sensing" data collection approach. We envision that this framework can be used by other researchers and growers as PGC adoption spreads throughout the Midwestern crop production areas.
Mothukuri, S. R.; Massey-Reed, S. R.; Potgieter, A.; Laws, K.; Hunt, C.; Amuzu-Aweh, E. N.; Cooper, M.; Mace, E.; Jordan, D.
Lodging in sorghum presents a significant challenge for plant breeders due to the trade-off between lodging resistance and grain yield. Manually measuring lodging across thousands of plots is time-consuming, expensive, and error-prone, making selection for lodging resistance challenging in breeding programs. Unmanned aerial vehicle (UAV)-derived metrics offer a potential high-throughput, cost-effective alternative for lodging phenotyping. This study developed a framework for predicting plot-level lodging from UAV imagery across 2,675 sorghum breeding plots. Multi-temporal canopy height data were collected at two critical time points: at maximum crop height and at manual lodging assessment. Height percentiles were extracted from UAV-derived point clouds generated using photogrammetric algorithms. These data were used to develop parametric, non-parametric, and ensemble prediction models, which were evaluated using three statistical metrics. The ensemble model, averaging predictions from all models, achieved the highest accuracy, with Pearson correlations of r = 0.80-0.84 and the lowest root mean square error (RMSE = 16-18), explaining 64-70% of variation in manual lodging counts. Model diagnostics and iterative refinement, including inspection of UAV imagery and dataset curation, had minimal impact on model performance, demonstrating the robustness of the approach. Model performance was consistent across sites, with minimal effects of stratified sampling on accuracy, confirming the ensemble approach as optimal for plot-level lodging assessment. This study demonstrates that integrating multi-temporal UAV imagery offers a practical alternative to labor-intensive manual evaluation methods by enabling high-throughput lodging assessment suitable for implementation in sorghum breeding programs.
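The two building blocks named above, height percentiles from a plot's point cloud and an unweighted ensemble average of model predictions, can be sketched briefly. The gamma-distributed heights and the percentile set are invented stand-ins for a ground-normalized photogrammetric point cloud:

```python
import numpy as np

def height_percentiles(point_cloud_z, qs=(50, 75, 90, 95, 99)):
    """Extract canopy-height percentiles from the z-coordinates of a
    plot's point cloud (heights assumed ground-normalized, in meters)."""
    return np.percentile(point_cloud_z, qs)

def ensemble_predict(predictions):
    """Unweighted mean across member models (rows) for each plot (columns)."""
    return np.mean(predictions, axis=0)

z = np.random.default_rng(4).gamma(2.0, 0.4, size=10000)  # stand-in heights
print(height_percentiles(z))

# Three hypothetical models' lodging predictions for two plots
preds = np.array([[10.0, 12.0], [11.0, 14.0], [12.0, 13.0]])
print(ensemble_predict(preds))  # [11. 13.]
```

Differencing such percentile features between the maximum-height flight and the post-lodging flight gives the drop in canopy height that the prediction models exploit.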
Sangjan, W.; Pukrongta, N.; Buchanan, T.; Carter, A. H.; Pumphrey, M. O.; Sankaran, S.
Continuous, high-frequency monitoring is essential to capture rapid phenological transitions and dynamic crop responses to the environment. However, most phenotyping platforms lack the temporal resolution and automation required for consistent, season-long trait assessment. This study introduces AGIcam, an open-source IoT camera system for automated and continuous in-field plant phenotyping and yield prediction. The platform integrates solar-powered Raspberry Pi units with a modular software stack, comprising Node-RED, InfluxDB, Grafana, and Microsoft Azure, for automated data acquisition, transfer, and visualization. In the 2022 growing season, 18 AGIcam systems were deployed in spring and winter wheat breeding trials, maintaining an uptime of over 85% while capturing frequent RGB and NoIR imagery. Time-series vegetation indices derived from these images were used to predict yield using random forest and Long Short-Term Memory (LSTM) models. The LSTM approach achieved the highest accuracy approximately one week after heading, with mean prediction errors of 3.41% for spring wheat and 1.62% for winter wheat. These results highlight the potential of IoT-based platforms such as AGIcam to enable real-time, scalable, and effective phenotyping solutions for data-driven crop improvement.
DeSalvio, A. J.; Adak, A.; Murray, S. C.; Jarquin, D.; Winans, N.; Crozier, D.; Rooney, W.
For nearly two decades, genomic selection has supported efforts to increase genetic gains in plant and animal improvement programs. However, novel phenomic strategies helping to predict complex traits in maize have proven beneficial when integrated into across- and within-environment genomic prediction models. One phenomic data modality is near infrared spectroscopy (NIRS), which records reflectance values of biological samples (e.g., maize kernels) based on chemical composition. Predictions of seven maize agronomic traits and three kernel composition traits across two years (2011-2012) and two management conditions (water stressed and well-watered) were conducted using combinations of NIRS and genomic data within four different cross-validation prediction scenarios. In aggregate, models incorporating NIRS data alongside genomic data improved predictive ability over models using only genomic data in 5 of 28 trait/cross-validation scenarios for across-environment prediction and 15 of 28 trait/environment scenarios for within-environment prediction, while the model with NIRS data alone had the highest prediction ability in only 1 of 28 scenarios for within-environment prediction. Potential causes of the surprisingly lower predictive power of phenomic relative to genomic data in this study are discussed, including sample size, sample homogenization, and low GxE. A genome-wide association study (GWAS) implicated known (i.e., MADS69, ZCN8, sh1, wx1, du1) and unknown candidate genes linked to plant height and flowering-related agronomic traits as well as compositional traits such as kernel protein and starch content. This study demonstrated that including NIRS with genomic markers is a viable method to predict multiple complex traits with improved predictive ability and elucidate underlying biological causes.
Key Message: Genomic and NIRS data from a maize diversity panel were used for prediction of agronomic and kernel composition traits while uncovering candidate genes for kernel protein and starch content.
Walsh, J. J.; Gorgu, L.; Cavel, E.; Poulain, V.; Gutierrez, L.; Mangina, E.; Negrao, S.
Plant phenotyping systematically quantifies plant traits such as growth, morphology, physiology, or yield, assessing genetic and environmental influences on plant performance. The integration of advanced phenotyping technologies, including imaging sensors and data analytics, facilitates the non-destructive and longitudinal acquisition of high-throughput data. Nevertheless, the sheer volume of such phenotyping data introduces significant challenges for researchers, particularly related to data processing. To overcome these challenges, researchers are turning to artificial intelligence (AI), a tool that can autonomously process and learn from large amounts of data. Despite this advantage, accurate image segmentation remains a key hurdle due to the complexity of plant morphology and environmental noise. In this study, we present the Botanical Spectrum Analyser (BSA), a user-friendly graphical user interface (GUI) that integrates a modified U-Net deep neural network for plant image segmentation. Designed for accessibility, BSA enables non-technical users to apply advanced AI segmentation to RGB and hyperspectral (VNIR and SWIR) imagery. We evaluated BSA's performance across three case studies involving wheat, barley, and Arabidopsis, demonstrating its robustness across species and imaging modalities. Our results show that BSA achieves an average accuracy of 99.7%, with F1-scores consistently exceeding 98% and strong Jaccard and recall performance across datasets. For challenging root segmentation tasks, BSA outperformed commercial algorithms, achieving a 76% F1-score compared to 24%, an improvement of more than 50 percentage points. These results highlight the adaptability of the BSA framework for diverse phenotyping scenarios, bridging the gap between advanced deep learning methods and accessible plant science applications.
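The segmentation metrics reported above (F1, Jaccard/IoU) can be computed directly from boolean masks; for two masks, F1 (the Dice coefficient) and IoU are monotonically related, so either alone ranks methods identically. A minimal sketch with toy masks:

```python
import numpy as np

def iou(pred, truth):
    """Intersection over union (Jaccard index) of two boolean masks."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

def f1(pred, truth):
    """F1 / Dice score, derived from IoU: F1 = 2*IoU / (1 + IoU)."""
    j = iou(pred, truth)
    return 2 * j / (1 + j)

# Toy 4x4 masks: prediction covers rows 0-1, ground truth rows 1-2
pred = np.zeros((4, 4), bool); pred[:2] = True
truth = np.zeros((4, 4), bool); truth[1:3] = True
print(iou(pred, truth), f1(pred, truth))  # 0.333..., 0.5
```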
Vong, G. Y. W.; Scott, P.; Claydon, W.; Daff, J.; Denby, K.; Ezer, D.
Background: Advances in LED lighting technologies have allowed researchers to explore increasingly complex light regimes. This has given us greater insight into plants' responses to dynamic light, including seasonality and fluctuating conditions, rather than the discrete (i.e., on/off) lighting previously explored. However, there is a need for methods to accurately program multi-channel/waveband LED lighting systems. Results: We present a multi-step, multidimensional algorithm to accurately program LED lights. This algorithm accounts for non-linearity between intensity settings and irradiance output, as well as bleedthrough between channels of different wavebands. Our algorithm outperforms other methods that treat waveband channels as independent variables, more accurately predicting intensity settings to achieve a desired irradiance when using multiple LED channels. Conclusions: This algorithm allows the community to accurately program complex light regimes to probe plant responses to dynamically changing light spectra. We have made this algorithm available to the plant science community as an R package, LightFitR (available on GitHub at: https://github.com/ginavong/LightFitR).
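Once each channel's nonlinear setting-to-irradiance response has been linearized, bleedthrough compensation reduces to solving a small linear system rather than setting channels independently. A hypothetical 3-channel sketch of that linear step; the matrix values are invented, and LightFitR's actual calibration procedure and interface may differ:

```python
import numpy as np

# Column j: irradiance each waveband receives per unit setting of
# channel j. Off-diagonal entries are bleedthrough between channels.
M = np.array([[1.00, 0.08, 0.01],
              [0.05, 1.00, 0.10],
              [0.00, 0.07, 1.00]])

target = np.array([120.0, 80.0, 60.0])  # desired irradiance per waveband

# Solve M @ settings = target in the least-squares sense
settings, *_ = np.linalg.lstsq(M, target, rcond=None)
print(np.allclose(M @ settings, target))  # True: bleedthrough compensated
```

Treating channels independently amounts to using only the diagonal of `M`, which over-delivers irradiance wherever the off-diagonal bleedthrough terms are non-zero.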
Zenkl, R.; McDonald, B. A.; Walter, A.; Anderegg, J.
Reliable, quantitative information on the presence and severity of crop diseases is critical for site-specific crop management and resistance breeding. Successful analysis of leaves under naturally variable lighting, presenting multiple disorders, and across phenological stages is a critical step towards high-throughput disease assessments directly in the field. Here, we present a dataset comprising 422 high-resolution images of flattened leaves captured under variable outdoor lighting, with polygon annotations of leaves, leaf necrosis, and insect damage, as well as point annotations of Septoria tritici blotch (STB) fruiting bodies (pycnidia) and rust pustules. Based on this dataset, we demonstrate the capability of deep learning for keypoint detection of pycnidia (F1 = 0.76) and rust pustules (F1 = 0.77), combined with semantic segmentation of leaves (IoU = 0.96), leaf necrosis (IoU = 0.77), and insect damage (IoU = 0.69), to reliably detect and quantify the presence of STB, leaf rusts, and insect damage under natural outdoor conditions. An analysis of intra- and inter-annotator agreement on selected images demonstrated that the proposed method achieved performance close to that of the annotators in the majority of scenarios. We validated the generalization capabilities of the proposed method by testing it on images of unstructured canopies acquired directly in the field and without manual interaction with single leaves. The corresponding imaging procedure can be adapted to support automated data acquisition. Model predictions were in good agreement with visual assessments of in-focus regions in these images, despite the presence of new challenges such as variable orientation of leaves and more complex lighting. This underscores the in-principle feasibility of diagnosing and quantifying the severity of foliar diseases under field conditions using the proposed imaging setup and image processing methods.
By demonstrating the ability to diagnose and quantify the severity of multiple diseases in complex natural scenarios, we lay the groundwork for a significantly more efficient, non-invasive in-field analysis of foliar diseases that can support resistance breeding and the implementation of core principles of precision agriculture.
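For reference, the segmentation metrics quoted in this abstract (F1 and IoU/Jaccard) are both computed from pixel-wise counts on binary masks; the example masks below are illustrative only:

```python
import numpy as np

def f1_and_iou(pred, truth):
    """Pixel-wise F1 and IoU (Jaccard) for binary segmentation masks,
    the metrics reported for leaf, necrosis, and insect-damage classes."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.sum(pred & truth)    # predicted positive, actually positive
    fp = np.sum(pred & ~truth)   # predicted positive, actually negative
    fn = np.sum(~pred & truth)   # missed positives
    f1 = 2 * tp / (2 * tp + fp + fn)
    iou = tp / (tp + fp + fn)
    return f1, iou
```

The two scores are monotonically related (F1 = 2·IoU / (1 + IoU)), which is why an IoU of 0.96 for leaves corresponds to an F1 near 0.98.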
Diepenbrock, C. H.; Tang, T.; Jines, M.; Technow, F.; Lira, S.; Podlich, D.; Cooper, M.; Messina, C. D.
Genetic gain in breeding programs depends on the predictive skill of genotype-to-phenotype algorithms and the precision of phenotyping, both integrated with well-defined breeding objectives for a target population of environments (TPE). The integration of physiology and genomics could improve predictive skill by capturing additive and non-additive interaction effects of genotype (G), environment (E), and management (M). Precision phenotyping at managed stress environments (MSEs) can elicit physiological expression of processes that differentiate germplasm for performance in target environments, thus enabling algorithm training. Gap analysis methodology enables the design of G×M technologies for target environments by assessing the difference between current and attainable yields within physiological limits. Harnessing digital technologies such as crop growth model–whole genome prediction (CGM-WGP), gap analysis, and MSEs can hasten genetic gain by improving predictive skill and the definition of breeding goals in the U.S. maize production TPE. A half-diallel maize experiment resulting from crossing 9 elite maize inbreds was conducted at 17 locations in the TPE and 6 locations at MSEs between 2017 and 2019. Analyses over 35 families represented by 2,367 hybrids demonstrated that CGM-WGP offered a predictive advantage (y) over WGP that increased with the occurrence of drought, as measured by decreasing whole-season evapotranspiration (ET): log(y) = 0.80 (±0.6) − 0.006 (±0.001) × ET; r² = 0.59; df = 21. Predictions of unobserved physiological traits using the CGM, akin to digital phenotyping, were stable. This understanding of germplasm response to ET enables the predictive design of opportunities to close productivity gaps. We conclude that enabling physiology through digital methods can hasten genetic gain by improving predictive skill and defining breeding objectives bounded by physiological realities.
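The reported relationship — log predictive advantage declining linearly with evapotranspiration — is an ordinary least-squares fit. The sketch below recovers coefficients of that form from simulated data; the ET range, noise level, and sample values are assumptions for illustration, not the study's data:

```python
import numpy as np

# Simulate the reported trend log(y) = 0.80 - 0.006 * ET with noise:
# the CGM-WGP advantage over WGP grows as ET decreases (drier seasons).
rng = np.random.default_rng(0)
et = rng.uniform(300, 700, size=23)                       # mm, hypothetical range
log_y = 0.80 - 0.006 * et + rng.normal(0, 0.05, size=et.size)

# Ordinary least squares for intercept and slope.
X = np.column_stack([np.ones_like(et), et])               # design matrix [1, ET]
coef, *_ = np.linalg.lstsq(X, log_y, rcond=None)
intercept, slope = coef
```

A negative fitted slope is what encodes the drought-dependence of the predictive advantage; with 23 observations and two parameters, the residual degrees of freedom match the abstract's df = 21.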
Loayza, H.; Ninanya, J.; Palacios, S.; Silva, L.; Pujaico Rivera, F.; Rinza, J.; Gastelo, M.; Aponte, M.; Kreuze, J. F.; Lindqvist-Kreuze, H.; Heider, B.; Kante, M.; Ramirez, D. A.
Potato (Solanum tuberosum L.) is a staple crop crucial to global food security, yet its production is severely threatened by late blight (LB), caused by Phytophthora infestans, one of the most destructive plant diseases worldwide. Breeding programs for LB resistance have traditionally relied on labor-intensive and subjective visual assessments, which limit scalability and consistency, particularly in early-generation trials. Unmanned aerial vehicle (UAV)-based remote sensing combined with machine learning (ML) offers a promising alternative for objective, high-throughput disease phenotyping. This study evaluated the potential of UAV-derived multispectral imagery and ML techniques to estimate LB severity across large and genetically diverse potato breeding populations, comprising 2,745 clones in one trial and 492 accessions in another, conducted in Oxapampa, Pasco, Peru. We compared vegetation index-based approaches with a machine learning framework that integrates K-means clustering and Kernel Ridge Regression (KRR) and assessed their ability to capture genotypic variation and support selection decisions. NDVI consistently showed a strong correlation with visually assessed LB severity, particularly at advanced stages of disease development, enabling objective discrimination between healthy and diseased canopy tissues. However, the KRR-based approach outperformed linear NDVI-based models by capturing nonlinear relationships between spectral responses and disease progression. Estimates of LB severity derived from NDVI and KRR models, expressed as best linear unbiased estimates (BLUEs), showed strong and biologically consistent relationships with the area under the disease progress curve (AUDPC), particularly during later UAV acquisitions. 
Selection coincidence between UAV-derived estimates and AUDPC-based rankings was substantially higher at intermediate to advanced stages of disease progression, suggesting that UAV assessments at these stages may capture sufficient phenotypic variation to distinguish genotypes. These findings indicate that UAV-based multispectral phenotyping, especially when integrated with ML, provides a practical and scalable approach for assessing LB severity in potato breeding programs while reducing the need for time-consuming field evaluations.
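The nonlinear spectral-response modelling this abstract attributes to Kernel Ridge Regression can be sketched in closed form. This is a generic KRR implementation, not the study's K-means + KRR pipeline, and the sigmoidal severity curve and kernel settings below are assumed for illustration:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_fit(X, y, lam=1e-3, gamma=1.0):
    """Closed-form kernel ridge regression: alpha = (K + lam*I)^(-1) y."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def krr_predict(X_train, alpha, X_new, gamma=1.0):
    """Predict at new points via the cross-kernel with the training set."""
    return rbf_kernel(X_new, X_train, gamma) @ alpha

# Hypothetical example: map a single spectral feature (e.g. an NDVI-like
# index) to late-blight severity (%) with a nonlinear, sigmoidal response.
X = np.linspace(0, 1, 50)[:, None]
y = 100 / (1 + np.exp(-10 * (0.5 - X[:, 0])))   # assumed severity curve
alpha = krr_fit(X, y, lam=1e-3, gamma=20.0)
pred = krr_predict(X, alpha, X, gamma=20.0)
```

A linear NDVI regression would force a straight-line response; the kernel form is what lets the model track saturation at the healthy and fully diseased extremes.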